31 research outputs found

    Modelos de gerenciamento de enclaves para execução segura de componentes de software

    Get PDF
    Advisor: Carlos Alberto Maziero. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 04/12/2020. Includes references: p. 106-115. Area of concentration: Computer Science.
    Abstract: Data confidentiality is becoming increasingly important to computer users, whether in a corporate or a home environment. Not only is business-sensitive data transmitted across networks and handled by a wide variety of software, but applications for banking transactions and other everyday tasks also manipulate sensitive user data whose confidentiality and integrity must be guaranteed. Several solutions have been proposed to that end, among them the Intel SGX (Software Guard Extensions) architecture, which provides mechanisms to encapsulate applications and data in a protected, access-restricted region of memory that neither other applications nor the operating system itself can read. Using these mechanisms to protect the confidentiality and integrity of sensitive application data imposes a performance penalty at run time, due to the restrictions and verifications enforced by the Intel SGX architecture. This work analyzes the programming models applied in solutions that use Intel SGX and presents alternatives that make more efficient use of the resources provided by the architecture while reducing the performance impact of its use. Two management models are presented: (i) enclave sharing and (ii) an enclave pool. To apply these models, an enclave provider architecture is proposed that decouples the enclave from the application using it, making it possible to apply the proposed management models and to offer enclave resources to applications as services. A prototype was built to evaluate the proposed architecture and models; performance tests show considerable reductions in the cost of enclave requests while maintaining good response times under multiple simultaneous requests. It is concluded that software architectural models can bring benefits in resource management and performance gains in the execution of secure applications. Keywords: Intel SGX, programming models, software architecture, design patterns, performance, resource optimization.
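    The abstract names two management models, enclave sharing and an enclave pool, without detailing an implementation. Purely as an illustration of the pool idea, here is a minimal Python sketch in which a hypothetical create_enclave factory (standing in for real Intel SGX enclave creation) is paid for up front, so that client requests only borrow and return ready enclaves:

```python
# Minimal conceptual sketch of the "enclave pool" management model: enclaves are
# created ahead of time so that requests skip the costly creation step. The
# create_enclave factory is a hypothetical placeholder, not part of the Intel
# SGX SDK or of the thesis prototype.
import queue

class EnclavePool:
    def __init__(self, size, create_enclave):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(create_enclave())   # pay the creation cost up front

    def acquire(self, timeout=None):
        # Block until a pre-created enclave is free instead of creating a new one.
        return self._idle.get(timeout=timeout)

    def release(self, enclave):
        # Return the enclave so the next request can reuse it.
        self._idle.put(enclave)

# Usage sketch: an enclave provider process owns the pool and serves many clients.
# pool = EnclavePool(size=8, create_enclave=make_enclave)
# enclave = pool.acquire()
# try:
#     enclave.call_trusted_function(...)
# finally:
#     pool.release(enclave)
```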

    Sistema para análise de qualidade de energia baseado em software livre

    Get PDF
    This document describes the development and implementation of a power analyzer system composed of signal acquisition hardware and power analysis software. The acquisition system includes an interface for connecting to personal computers and transferring the acquired data to the software for analysis. The software was developed using free tools and frameworks, which reduces the development cost. The latest definitions for power computation, described in IEEE Std 1459-2010 for unbalanced and non-sinusoidal systems, are used. To obtain more accurate results, a Kalman filter is used to decompose the voltage and current signals into their fundamental and harmonic components; compared with the FFT, it yields better results under transient conditions. The document describes the development of the hardware and software, including the internal software structure and the implementation details of the power computations. Finally, simulation and experimental results are presented to validate the proposal; they are compared with theoretical results and with the values obtained by the Fluke 434 power analyzer. Funding: CAPES.
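    The abstract does not give the filter formulation. A common choice is to model the sampled signal as a sum of sinusoids at the known fundamental frequency and selected harmonics, and to estimate their in-phase and quadrature amplitudes with a linear Kalman filter. The NumPy sketch below follows that assumption and is only illustrative, not the system described above:

```python
# Illustrative sketch (not the system above): estimate the amplitudes of the
# fundamental and selected harmonics of a sampled signal with a linear Kalman
# filter, assuming the fundamental frequency f0 is known (e.g. 60 Hz).
import numpy as np

def harmonic_kalman(signal, fs, f0=60.0, harmonics=(1, 3, 5), q=1e-4, r=1e-2):
    n_states = 2 * len(harmonics)        # one [a_h, b_h] pair per harmonic h
    x = np.zeros(n_states)               # state: in-phase/quadrature amplitudes
    P = np.eye(n_states)                 # state covariance
    Q = q * np.eye(n_states)             # process noise (slow amplitude drift)
    estimates = []
    for k, z in enumerate(signal):
        t = k / fs
        # Observation model: z_k = sum_h a_h*cos(2*pi*h*f0*t) + b_h*sin(2*pi*h*f0*t)
        H = np.empty(n_states)
        for i, h in enumerate(harmonics):
            H[2 * i] = np.cos(2 * np.pi * h * f0 * t)
            H[2 * i + 1] = np.sin(2 * np.pi * h * f0 * t)
        P = P + Q                         # predict (amplitudes nearly constant)
        S = H @ P @ H + r                 # innovation variance (scalar measurement)
        K = (P @ H) / S                   # Kalman gain
        x = x + K * (z - H @ x)           # update state with the new sample
        P = P - np.outer(K, H @ P)        # update covariance
        estimates.append(x.copy())
    return np.array(estimates)            # per-sample amplitude estimates

# Example: a 60 Hz fundamental plus a small 3rd harmonic, sampled at 3.84 kHz.
fs = 3840.0
t = np.arange(0, 0.2, 1 / fs)
v = 311 * np.cos(2 * np.pi * 60 * t) + 15 * np.sin(2 * np.pi * 180 * t)
amps = harmonic_kalman(v, fs)
print(amps[-1])   # final [a1, b1, a3, b3, a5, b5] estimates
```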

    Secure Cloud Storage with Client-Side Encryption Using a Trusted Execution Environment

    Full text link
    With the evolution of computer systems, the amount of sensitive data to be stored grows, as does the number of threats against it, making data confidentiality increasingly important to computer users. With devices now always connected to the Internet, cloud data storage services have become practical and common, allowing quick access to data wherever the user is. This practicality brings a concern: the confidentiality of data that is handed to third parties for storage. In the home environment, disk encryption tools have gained special attention from users, being used on personal computers and available natively in some smartphone operating systems. The present work uses data sealing, a feature provided by the Intel Software Guard Extensions (Intel SGX) technology, for file encryption. A virtual file system is created in which applications can store their data, preserving the security guarantees provided by Intel SGX, before the data is sent to a storage provider. This way, even if the storage provider is compromised, the data remains safe. To validate the proposal, the Cryptomator software, a free client-side encryption tool for cloud files, was integrated with an Intel SGX application (enclave) for data sealing. The results demonstrate that the solution is feasible in terms of performance and security, and can be expanded and refined for practical use and integration with cloud synchronization services
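    The sealing step itself runs inside an SGX enclave through the SDK's C interface (sgx_seal_data/sgx_unseal_data). As a language-neutral illustration of the client-side idea, encrypting every file with authenticated encryption before it ever reaches the provider, here is a minimal Python sketch in which a locally held AES-GCM key merely stands in for the enclave's sealing key:

```python
# Conceptual sketch only: AES-GCM under a locally held key stands in for SGX
# sealing, in which the key is derived inside the enclave and never leaves it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

local_key = AESGCM.generate_key(bit_length=128)   # stand-in for the sealing key

def seal_file(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    # The ciphertext (with its auth tag) is what gets uploaded to the provider.
    return nonce + AESGCM(local_key).encrypt(nonce, plaintext, None)

def unseal_file(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(local_key).decrypt(nonce, ciphertext, None)

sealed = seal_file(b"contents synced to the cloud")
assert unseal_file(sealed) == b"contents synced to the cloud"
```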

    Molecular mechanisms of cell death: recommendations of the Nomenclature Committee on Cell Death 2018.

    Get PDF
    Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field

    Multiorgan MRI findings after hospitalisation with COVID-19 in the UK (C-MORE): a prospective, multicentre, observational cohort study

    Get PDF
    Introduction: The multiorgan impact of moderate to severe coronavirus infections in the post-acute phase is still poorly understood. We aimed to evaluate the excess burden of multiorgan abnormalities after hospitalisation with COVID-19, evaluate their determinants, and explore associations with patient-related outcome measures.
    Methods: In a prospective, UK-wide, multicentre MRI follow-up study (C-MORE), adults (aged ≥18 years) discharged from hospital following COVID-19 who were included in Tier 2 of the Post-hospitalisation COVID-19 study (PHOSP-COVID) and contemporary controls with no evidence of previous COVID-19 (SARS-CoV-2 nucleocapsid antibody negative) underwent multiorgan MRI (lungs, heart, brain, liver, and kidneys) with quantitative and qualitative assessment of images and clinical adjudication when relevant. Individuals with end-stage renal failure or contraindications to MRI were excluded. Participants also underwent detailed recording of symptoms, and physiological and biochemical tests. The primary outcome was the excess burden of multiorgan abnormalities (two or more organs) relative to controls, with further adjustments for potential confounders. The C-MORE study is ongoing and is registered with ClinicalTrials.gov, NCT04510025.
    Findings: Of 2710 participants in Tier 2 of PHOSP-COVID, 531 were recruited across 13 UK-wide C-MORE sites. After exclusions, 259 C-MORE patients (mean age 57 years [SD 12]; 158 [61%] male and 101 [39%] female) who were discharged from hospital with PCR-confirmed or clinically diagnosed COVID-19 between March 1, 2020, and Nov 1, 2021, and 52 non-COVID-19 controls from the community (mean age 49 years [SD 14]; 30 [58%] male and 22 [42%] female) were included in the analysis. Patients were assessed at a median of 5·0 months (IQR 4·2–6·3) after hospital discharge. Compared with non-COVID-19 controls, patients were older, living with more obesity, and had more comorbidities. Multiorgan abnormalities on MRI were more frequent in patients than in controls (157 [61%] of 259 vs 14 [27%] of 52; p<0·0001) and independently associated with COVID-19 status (odds ratio [OR] 2·9 [95% CI 1·5–5·8]; p_adjusted=0·0023) after adjusting for relevant confounders. Compared with controls, patients were more likely to have MRI evidence of lung abnormalities (p=0·0001; parenchymal abnormalities), brain abnormalities (p<0·0001; more white matter hyperintensities and regional brain volume reduction), and kidney abnormalities (p=0·014; lower medullary T1 and loss of corticomedullary differentiation), whereas cardiac and liver MRI abnormalities were similar between patients and controls. Patients with multiorgan abnormalities were older (difference in mean age 7 years [95% CI 4–10]; mean age of 59·8 years [SD 11·7] with multiorgan abnormalities vs mean age of 52·8 years [11·9] without multiorgan abnormalities; p<0·0001), more likely to have three or more comorbidities (OR 2·47 [1·32–4·82]; p_adjusted=0·0059), and more likely to have a more severe acute infection (acute CRP >5 mg/L, OR 3·55 [1·23–11·88]; p_adjusted=0·025) than those without multiorgan abnormalities. Presence of lung MRI abnormalities was associated with a two-fold higher risk of chest tightness, and multiorgan MRI abnormalities were associated with severe and very severe persistent physical and mental health impairment (PHOSP-COVID symptom clusters) after hospitalisation.
    Interpretation: After hospitalisation for COVID-19, people are at risk of multiorgan abnormalities in the medium term.
Our findings emphasise the need for proactive multidisciplinary care pathways, with the potential for imaging to guide surveillance frequency and therapeutic stratification

    Modelos de gerenciamento de enclaves para execução segura de componentes de software

    No full text

    DUNE Offline Computing Conceptual Design Report

    No full text
    This document describes the conceptual design of the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), that is, the offline computing needed to accomplish its physics goals. Those goals include 1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, 2) studying astrophysical neutrino sources and rare processes, and 3) understanding the physics of neutrino interactions in matter. The emphasis is on the computing infrastructure needed to acquire, store, catalog, reconstruct, simulate, and analyze approximately 30 PB of data per year from DUNE and its prototypes, and on tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, the goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves and to deliver the computing that achieves the physics goals of the experiment. The document describes the physics objectives, organization, use cases, and proposed technical solutions

    Search for ultrahigh energy neutrinos in highly inclined events at the Pierre Auger Observatory*

    No full text

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    No full text
    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speedup of four orders of magnitude compared with the equivalent CPU version: the simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype
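    The abstract's key implementation point is that ordinary Python functions are compiled into CUDA kernels with Numba. The snippet below illustrates only that compilation and launch pattern with a generic per-pixel kernel; it is not code from the DUNE near-detector simulator:

```python
# Generic illustration of the Numba approach mentioned above: a Python function
# is compiled into a CUDA kernel with @cuda.jit and launched over a pixel array.
import numpy as np
from numba import cuda

@cuda.jit
def accumulate_charge(pixel_current, charge, gain):
    # One GPU thread per pixel: integrate the induced current into a charge array.
    i = cuda.grid(1)
    if i < pixel_current.size:
        charge[i] += gain * pixel_current[i]

n_pixels = 1000
pixel_current = cuda.to_device(np.random.rand(n_pixels).astype(np.float32))
charge = cuda.to_device(np.zeros(n_pixels, dtype=np.float32))

threads_per_block = 128
blocks = (n_pixels + threads_per_block - 1) // threads_per_block
accumulate_charge[blocks, threads_per_block](pixel_current, charge, np.float32(0.5))
print(charge.copy_to_host()[:5])
```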